Universal Quantum Speedup for Branch-and-Bound, Branch-and-Cut, and Tree-Search Algorithms
Mixed Integer Programs (MIPs) model many optimization problems of interest in
Computer Science, Operations Research, and Financial Engineering. Solving MIPs
is NP-hard in general, but several solvers have found success in obtaining
near-optimal solutions for problems of intermediate size. Branch-and-Cut
algorithms, which combine Branch-and-Bound logic with cutting-plane routines,
are at the core of modern MIP solvers. Montanaro proposed a quantum algorithm
with a near-quadratic speedup compared to classical Branch-and-Bound algorithms
in the worst case, when every optimal solution is desired. In practice,
however, a near-optimal solution is satisfactory, and by leveraging tree-search
heuristics to search only a portion of the solution tree, classical algorithms
can perform much better than the worst-case guarantee. In this paper, we
propose a quantum algorithm, Incremental-Quantum-Branch-and-Bound, with
universal near-quadratic speedup over classical Branch-and-Bound algorithms for
every input, i.e., if classical Branch-and-Bound has complexity on an
instance that leads to solution depth , Incremental-Quantum-Branch-and-Bound
offers the same guarantees with a complexity of . Our
results are valid for a wide variety of search heuristics, including
depth-based, cost-based, and heuristics. Universal speedups are also
obtained for Branch-and-Cut as well as heuristic tree search. Our algorithms
are directly comparable to commercial MIP solvers, and guarantee near-quadratic
speedup whenever . We use numerical simulation to verify that for typical instances of the Sherrington-Kirkpatrick model, Maximum
Independent Set, and Portfolio Optimization; as well as to extrapolate the
dependence of on input size parameters. This allows us to project the
typical performance of our quantum algorithms for these important problems.
Comment: 25 pages, 5 figures
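The classical Branch-and-Bound tree search that both the baseline and the quantum algorithm build on can be illustrated with a minimal sketch. The 0/1-knapsack instance and the LP-relaxation bounding rule below are generic illustrations, not taken from the paper:

```python
# Minimal classical Branch-and-Bound for 0/1 knapsack (a tiny MIP).
# Illustrative only: the paper's quantum algorithm accelerates this
# kind of bounded tree search, with cost growing with the number of
# explored nodes.

def knapsack_bnb(values, weights, capacity):
    """Return (best_value, nodes_explored)."""
    n = len(values)
    # Sort items by value density so the fractional bound is greedy-optimal.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]
    best = 0
    nodes = 0

    def bound(k, cap, val):
        # LP-relaxation bound: take remaining items greedily,
        # the last one fractionally.
        b = val
        for i in range(k, n):
            if w[i] <= cap:
                cap -= w[i]
                b += v[i]
            else:
                b += v[i] * cap / w[i]
                break
        return b

    def branch(k, cap, val):
        nonlocal best, nodes
        nodes += 1
        if val > best:
            best = val
        if k == n or bound(k, cap, val) <= best:
            return                          # prune: bound cannot beat incumbent
        if w[k] <= cap:                     # branch: include item k
            branch(k + 1, cap - w[k], val + v[k])
        branch(k + 1, cap, val)             # branch: exclude item k

    branch(0, capacity, 0)
    return best, nodes
```

The count of explored nodes returned here plays the role of the classical complexity that the quantum algorithm improves to roughly its square root (times depth-dependent factors).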
Analyzing Convergence in Quantum Neural Networks: Deviations from Neural Tangent Kernels
A quantum neural network (QNN) is a parameterized mapping efficiently
implementable on near-term Noisy Intermediate-Scale Quantum (NISQ) computers.
It can be used for supervised learning when combined with classical
gradient-based optimizers. Despite the existing empirical and theoretical
investigations, the convergence of QNN training is not fully understood.
Inspired by the success of the neural tangent kernels (NTKs) in probing into
the dynamics of classical neural networks, a recent line of works proposes to
study over-parameterized QNNs by examining a quantum version of tangent
kernels. In this work, we study the dynamics of QNNs and show that, contrary
to popular belief, they are qualitatively different from those of any kernel
regression: due to the unitarity of quantum operations, there is a
non-negligible deviation from the tangent kernel regression derived at random
initialization. As a result of this deviation, we prove at most sublinear
convergence for QNNs with Pauli measurements, which is beyond the explanatory
power of any kernel-regression dynamics. We then present the actual
dynamics of QNNs in the limit of over-parameterization. The new dynamics
capture the change of the convergence rate during training and imply that the
range of measurements is crucial to fast QNN convergence.
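For contrast with the paper's finding, here is what fixed tangent-kernel dynamics look like for a classical linear model (an illustration, not the paper's QNN): training follows the frozen kernel exactly and the residual contracts geometrically, which is precisely the linear-convergence behavior the paper shows QNNs deviate from.

```python
# Toy contrast case (not the paper's model): for a linear model the
# tangent kernel K is constant during training, and gradient descent
# gives r_{t+1} = (I - eta*K) r_t, i.e. geometric residual decay.
import numpy as np

rng = np.random.default_rng(0)
Phi = rng.normal(size=(5, 12))        # over-parameterized feature map, 5 samples
y = rng.normal(size=5)
K = Phi @ Phi.T                       # tangent kernel, fixed for a linear model
eta = 0.9 / np.linalg.eigvalsh(K).max()

w = np.zeros(12)
residuals = []
for _ in range(500):
    r = Phi @ w - y
    residuals.append(np.linalg.norm(r))
    w -= eta * Phi.T @ r              # gradient step on 0.5*||Phi w - y||^2
```

The QNN dynamics derived in the paper do not admit such a fixed-kernel contraction, which is what rules out this geometric (linear) convergence rate.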
Sublinear classical and quantum algorithms for general matrix games
We investigate sublinear classical and quantum algorithms for matrix games, a
fundamental problem in optimization and machine learning, with provable
guarantees. Given a matrix , sublinear algorithms
for the matrix game
were previously known only for two special cases: (1) being the
-norm unit ball, and (2) being either the -
or the -norm unit ball. We give a sublinear classical algorithm that
can interpolate smoothly between these two cases: for any fixed ,
we solve the matrix game where is a -norm unit ball
within additive error in time . We
also provide a corresponding sublinear quantum algorithm that solves the same
task in time with a
quadratic improvement in both and . Both our classical and quantum
algorithms are optimal in the dimension parameters and up to
poly-logarithmic factors. Finally, we propose sublinear classical and quantum
algorithms for the approximate Carath\'eodory problem and the -margin
support vector machines as applications.
Comment: 16 pages, 2 figures. To appear in the Thirty-Fifth AAAI Conference on
Artificial Intelligence (AAAI 2021).
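The simplex-vs-simplex (L1-ball) special case above can be illustrated with a standard full-information multiplicative-weights solver. This is a classical sketch for intuition only: the paper's algorithms are sublinear and sample matrix entries rather than reading the whole matrix each round.

```python
# Multiplicative-weights dynamics for the zero-sum matrix game
#   max_x min_y  x^T A y,  x and y on probability simplices.
# Both players run Hedge; the averaged strategies approach an
# approximate equilibrium by the standard no-regret argument.
import numpy as np

def mwu_matrix_game(A, T=2000):
    """Return (approximate game value, duality gap) after T rounds."""
    n, m = A.shape
    eta = np.sqrt(np.log(max(n, m)) / T)
    lx, ly = np.zeros(n), np.zeros(m)      # log-weights of each player
    X, Y = np.zeros(n), np.zeros(m)        # running strategy sums
    for _ in range(T):
        x = np.exp(lx - lx.max()); x /= x.sum()
        y = np.exp(ly - ly.max()); y /= y.sum()
        X += x; Y += y
        lx += eta * (A @ y)                # max player ascends its payoffs
        ly -= eta * (A.T @ x)              # min player descends its losses
    X /= T; Y /= T
    gap = (A @ Y).max() - (A.T @ X).min()  # best-response duality gap
    return X @ A @ Y, gap
```

For a matching-pennies payoff matrix the averaged strategies converge to uniform and the duality gap shrinks at the usual O(sqrt(log(n)/T)) no-regret rate.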
Parameter Setting in Quantum Approximate Optimization of Weighted Problems
Quantum Approximate Optimization Algorithm (QAOA) is a leading candidate
algorithm for solving combinatorial optimization problems on quantum computers.
However, in many cases QAOA requires computationally intensive parameter
optimization. The challenge of parameter optimization is particularly acute in
the case of weighted problems, for which the eigenvalues of the phase operator
are non-integer and the QAOA energy landscape is not periodic. In this work, we
develop parameter setting heuristics for QAOA applied to a general class of
weighted problems. First, we derive optimal parameters for QAOA with depth
applied to the weighted MaxCut problem under different assumptions on the
weights. In particular, we rigorously prove the conventional wisdom that in the
average case the first local optimum near zero gives globally-optimal QAOA
parameters. Second, for we prove that the QAOA energy landscape for
weighted MaxCut approaches that for the unweighted case under a simple
rescaling of parameters. Therefore, we can use parameters previously obtained
for unweighted MaxCut for weighted problems. Finally, we prove that for
the QAOA objective sharply concentrates around its expectation, which means
that our parameter setting rules hold with high probability for a random
weighted instance. We numerically validate this approach on general weighted
graphs and show that on average the QAOA energy with the proposed fixed
parameters is only percentage points away from that with optimized
parameters. Third, we propose a general heuristic rescaling scheme inspired by
the analytical results for weighted MaxCut and demonstrate its effectiveness
using QAOA with the XY Hamming-weight-preserving mixer applied to the portfolio
optimization problem. Our heuristic improves the convergence of local
optimizers, reducing the number of iterations by 7.2x on average.
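A minimal sketch of a rescaling heuristic of the kind described: transfer gamma parameters obtained for unweighted MaxCut to a weighted instance by dividing by the root-mean-square edge weight. The exact rescaling rule used in the paper may differ; this normalization is an assumption for illustration.

```python
# Hedged sketch: rescale unweighted-MaxCut gamma parameters for a
# weighted instance by the RMS edge weight (illustrative rule, not
# necessarily the paper's exact prescription).  Only gamma is
# rescaled: the phase operator's eigenvalues scale with the weights,
# while the mixer (and hence beta) is weight-independent.
import math

def rescale_gammas(gammas_unweighted, weights):
    rms = math.sqrt(sum(w * w for w in weights) / len(weights))
    return [g / rms for g in gammas_unweighted]
```

The intuition matches the abstract: after rescaling, the weighted energy landscape approximately coincides with the unweighted one, so pre-optimized unweighted parameters become good starting points.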
Quantum algorithm for estimating volumes of convex bodies
Estimating the volume of a convex body is a central problem in convex
geometry and can be viewed as a continuous version of counting. We present a
quantum algorithm that estimates the volume of an -dimensional convex body
within multiplicative error using
queries to a membership oracle and
additional arithmetic operations. For
comparison, the best known classical algorithm uses
queries and
additional arithmetic operations. To the
best of our knowledge, this is the first quantum speedup for volume estimation.
Our algorithm is based on a refined framework for speeding up simulated
annealing algorithms that might be of independent interest. This framework
applies in the setting of "Chebyshev cooling", where the solution is expressed
as a telescoping product of ratios, each having bounded variance. We develop
several novel techniques when implementing our framework, including a theory of
continuous-space quantum walks with rigorous bounds on discretization error. To
complement our quantum algorithms, we also prove that volume estimation
requires quantum membership queries, which rules
out the possibility of exponential quantum speedup in and shows optimality
of our algorithm in up to poly-logarithmic factors.
Comment: 61 pages, 8 figures. v2: Quantum query complexity improved to
and number of additional arithmetic operations improved to . v3: Improved
Section 4.3.3 on nondestructive mean estimation and Section 6 on quantum
lower bounds; various minor changes.
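The telescoping-product structure behind "Chebyshev cooling" can be sketched on a toy discrete system: write the target quantity as a product of ratios between successive temperatures, and estimate each ratio by sampling from the current Gibbs distribution. The energy function and cooling schedule below are illustrative, not from the paper:

```python
# Sketch of an annealing estimator via a telescoping product:
#   Z(b_m) = Z(0) * prod_i  Z(b_{i+1}) / Z(b_i),
# where each ratio equals E_{pi_{b_i}}[ exp(-(b_{i+1}-b_i) * E) ]
# and so can be estimated by sampling at inverse temperature b_i.
# A slow schedule keeps every ratio near 1 (bounded variance),
# which is the point of Chebyshev cooling.
import math, random

random.seed(1)
energies = [0.0, 0.5, 1.0, 1.5, 2.0]           # tiny discrete system

def Z(beta):
    return sum(math.exp(-beta * e) for e in energies)

def estimate_Z(beta_max, steps=20, samples=4000):
    betas = [beta_max * i / steps for i in range(steps + 1)]
    est = Z(0.0)                                # Z(0) = number of states
    for b0, b1 in zip(betas, betas[1:]):
        probs = [math.exp(-b0 * e) / Z(b0) for e in energies]
        draws = random.choices(energies, weights=probs, k=samples)
        est *= sum(math.exp(-(b1 - b0) * e) for e in draws) / samples
    return est

approx, exact = estimate_Z(2.0), Z(2.0)
```

In the volume setting the role of Z is played by volumes of a nested family of bodies, and the quantum speedup comes from accelerating the per-ratio estimation; the product structure itself is unchanged.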